Optimal control
Optimal control theory, an extension of the calculus of variations, is a mathematical optimization method for deriving control policies. The method is largely due to the work of Lev Pontryagin and his collaborators in the Soviet Union〔L. S. Pontryagin, 1962. ''The Mathematical Theory of Optimal Processes''.〕 and Richard Bellman in the United States. Optimal control can be seen as a control strategy in control theory.
==General method==
Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A control problem includes a cost functional that is a function of state and control variables. An optimal control is characterized by a set of differential equations describing the paths of the control variables that minimize the cost functional. The optimal control can be derived using Pontryagin's maximum principle (a necessary condition also known as Pontryagin's minimum principle or simply Pontryagin's Principle〔I. M. Ross, 2009. ''A Primer on Pontryagin's Principle in Optimal Control'', Collegiate Publishers. ISBN 978-0-9843571-0-9.〕), or by solving the Hamilton–Jacobi–Bellman equation (a sufficient condition).
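As a minimal illustration of how the maximum principle is applied (the scalar dynamics and quadratic cost here are chosen purely for this sketch), consider steering the state \dot{x}(t)=u(t) from x(0)=x_0 to x(t_f)=0, with t_f fixed, while minimizing
:J=\frac{1}{2}\int_{0}^{t_f} u(t)^2\,\operatorname{d}t.
The Hamiltonian is H=\tfrac{1}{2}u^2+\lambda u. The stationarity condition \partial H/\partial u = u+\lambda = 0 gives u^*(t)=-\lambda(t), while the costate equation \dot{\lambda}=-\partial H/\partial x = 0 makes \lambda constant; enforcing x(t_f)=0 then yields the constant optimal control u^*=-x_0/t_f.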
Consider, as an informal example, a car traveling on a straight line over a hilly road. The question is, how should the driver press the accelerator pedal in order to ''minimize'' the total traveling time? In this example, the term ''control law'' refers specifically to the way in which the driver presses the accelerator and shifts the gears. The ''system'' consists of both the car and the road, and the ''optimality criterion'' is the minimization of the total traveling time. Control problems usually include ancillary constraints: for example, the amount of available fuel might be limited, the accelerator pedal cannot be pushed through the floor of the car, there may be speed limits, and so on.
A suitable cost functional is a mathematical expression giving the traveling time as a function of the speed, the geometry of the road, and the initial conditions of the system. Constraints are often interchangeable with the cost functional: a hard limit on fuel, for instance, can instead be expressed as a penalty on fuel consumption in the cost.
Another optimal control problem is to find the way to drive the car so as to minimize its fuel consumption, given that it must complete a given course within a prescribed time limit. Yet another control problem is to minimize the total monetary cost of completing the trip, given assumed monetary prices for time and fuel.
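For the fuel-minimization variant, for instance, the problem can be written out explicitly; here f denotes an assumed fuel-flow model depending on the speed v and the throttle input u, d is the length of the course, and T is the allowed travel time:
:J=\int_{t_0}^{t_f} f\,[\,v(t),u(t)\,]\,\operatorname{d}t, \qquad \text{subject to} \quad \int_{t_0}^{t_f} v(t)\,\operatorname{d}t = d, \quad t_f - t_0 \leq T.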
A more abstract framework goes as follows. Minimize the continuous-time cost functional
:J=\Phi\,[\,\textbf{x}(t_0),t_0,\textbf{x}(t_f),t_f\,] + \int_{t_0}^{t_f} \mathcal{L}\,[\,\textbf{x}(t),\textbf{u}(t),t\,]\,\operatorname{d}t
subject to the first-order dynamic constraints (the state equation)
: \dot{\textbf{x}}(t) = \textbf{a}\,[\,\textbf{x}(t),\textbf{u}(t),t\,],
the algebraic ''path constraints''
: \textbf{b}\,[\,\textbf{x}(t),\textbf{u}(t),t\,] \leq \textbf{0},
and the boundary conditions
:\boldsymbol{\phi}\,[\,\textbf{x}(t_0),t_0,\textbf{x}(t_f),t_f\,] = 0
where \textbf{x}(t) is the ''state'', \textbf{u}(t) is the ''control'', t is the independent variable (generally speaking, time), t_0 is the initial time, and t_f is the terminal time. The terms \Phi and \mathcal{L} are called the ''endpoint cost'' and the ''Lagrangian'', respectively. Furthermore, it is noted that the path constraints are in general ''inequality'' constraints and thus may not be active (i.e., equal to zero) at the optimal solution. It is also noted that the optimal control problem as stated above may have multiple solutions (i.e., the solution may not be unique). Thus, it is most often the case that any solution [\,\textbf{x}^*(t),\textbf{u}^*(t)\,] to the optimal control problem is ''locally minimizing''.
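In practice, problems of this form are frequently solved numerically by ''direct methods'': the state and control trajectories are discretized on a time grid and the resulting finite-dimensional problem is handed to a nonlinear-programming solver. The following sketch assumes double-integrator dynamics, a quadratic Lagrangian, fixed endpoints, and a forward-Euler transcription purely for illustration.
<syntaxhighlight lang="python">
# Minimal direct-transcription sketch: the dynamics, horizon, and boundary
# conditions below are illustrative assumptions, not part of the general theory.
import numpy as np
from scipy.optimize import minimize

N, T = 30, 1.0              # number of intervals and final time
h = T / N                   # step length

def unpack(z):
    """Split the decision vector into the state and control trajectories."""
    x = z[: 2 * (N + 1)].reshape(N + 1, 2)   # state: position and velocity at each grid point
    u = z[2 * (N + 1):]                      # control: one acceleration value per interval
    return x, u

def cost(z):
    _, u = unpack(z)
    return h * np.sum(u ** 2)                # Lagrangian L = u^2, integrated with a rectangle rule

def defects(z):
    """Equality constraints: forward-Euler dynamics plus the boundary conditions."""
    x, u = unpack(z)
    cons = [x[k + 1] - (x[k] + h * np.array([x[k, 1], u[k]])) for k in range(N)]
    cons.append(x[0] - np.array([0.0, 0.0]))   # start at rest at the origin
    cons.append(x[N] - np.array([1.0, 0.0]))   # end at rest at position 1
    return np.concatenate(cons)

z0 = np.zeros(2 * (N + 1) + N)
sol = minimize(cost, z0, method="SLSQP", constraints={"type": "eq", "fun": defects})
x_opt, u_opt = unpack(sol.x)
print(sol.fun)               # value of the discretized cost functional at the solution
</syntaxhighlight>
Direct transcription of this kind trades the two-point boundary-value problem that arises from Pontryagin's principle for a large but sparse, otherwise standard, constrained optimization problem.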

Excerpt source: Wikipedia, the free encyclopedia.
Read the full "Optimal control" article on Wikipedia.